Web Survey Bibliography
Probability-based sampling is the survey researcher's most reliable method for making population estimates when only sample data are available. Non-probability samples are considered less reliable, with presumably biased estimates, because of their convenient, non-representative construction. In the realm of Web surveys, a representative sample drawn from a probability-based Web panel (such as KnowledgePanel®) will, after post-stratification weighting, produce reliable, generalizable, unbiased study estimates. However, because any panel is finite in size, there are instances when too few Web panel members are available to meet minimum sample size requirements. In such situations, a supplemental sample from a non-probability opt-in Web panel may be added to satisfy sample size targets.

First, this paper shows that when both samples are profiled with questions on early adopter (EA) attitudes, non-probability opt-in samples tend to have proportionally more EA characteristics than probability samples. This finding is consistent across demographic groups. Second, taking advantage of these EA differences, the paper describes a statistical technique for calibrating opt-in cases blended with probability-based cases using these EA characteristics. Successful results from different studies are demonstrated. Additionally, to quantify the benefits of calibration: using, for example, data from one probability sample (n = 611) and one opt-in sample (n = 750), calibration reduced the average mean squared error from 3.8 to 1.8 and the average estimated bias from 2.056 to 0.064. Other examples are presented. Knowledge Networks believes that this calibration approach is a viable methodology for combining probability and non-probability Web panel samples. It is also a relatively efficient procedure that serves projects with rapid data turnaround requirements.
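The calibration idea described above can be sketched in miniature. The abstract does not publish its weighting formulas, so the following is a hypothetical one-dimensional illustration, not the paper's actual procedure: opt-in cases receive weight adjustments so that their weighted early adopter (EA) distribution matches a benchmark taken from the probability sample. All data values (the 30%/70% EA benchmark, the toy sample) are invented for illustration.

```python
# Hypothetical sketch of EA-based calibration (post-stratification on one
# variable). Not the paper's actual method; data below are illustrative.

def calibrate_weights(categories, base_weights, benchmarks):
    """Scale each case's weight by (benchmark share / weighted sample share)
    for its EA category, so the weighted distribution matches the benchmark."""
    total = sum(base_weights)
    # weighted share of each EA category in the opt-in sample
    shares = {}
    for cat, w in zip(categories, base_weights):
        shares[cat] = shares.get(cat, 0.0) + w / total
    factors = {cat: benchmarks[cat] / shares[cat] for cat in benchmarks}
    return [w * factors[cat] for cat, w in zip(categories, base_weights)]

# Toy opt-in sample that over-represents "high" EA cases (60% vs. a 30%
# benchmark from the probability sample), with uniform starting weights.
ea = ["high"] * 6 + ["low"] * 4
w0 = [1.0] * 10
bench = {"high": 0.3, "low": 0.7}  # assumed probability-sample EA shares
w1 = calibrate_weights(ea, w0, bench)
```

After calibration, the weighted "high"-EA share of the opt-in sample equals the 30% benchmark while the total weight is preserved; in practice a multivariable version (e.g. raking over EA categories and demographics jointly) would be used.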
Web survey bibliography - Cobb, C. L. (7)
- Watch Your Language!: The Impact of the Survey Language on Bilingual Hispanics’ Response Process...; 2013; Ay, M., Gross, W., Cobb, C. L., Thomas, R. K.
- Impact of Filter Questions on Estimates of Media Consumption; 2013; Cobb, C. L., Godinez, D., Thomas, R. K., Baim, J.
- How Far Have We Come? The Lingering Digital Divide and Its Impact on the Representativeness of Internet...; 2013; Dennis, J. M., Cobb, C. L.
- Effects of Response Format on Measurement of Readership; 2013; Thomas, R. K., Cobb, C. L., Baim, J.
- Using Probability-based On-line Samples to Calibrate Non-probability Opt-in Samples; 2012; DiSogra, C., Cobb, C. L., Chan, E., Dennis, J. M.
- Observed Differences in the Placement and Wording of Neutral Response Options in Web Surveys: An Experiment...; 2011; Walton, L., Cobb, C. L., DiSogra, C.
- Calibrating Non-Probability Internet Samples with Probability Samples Using Early Adopter Characteristics...; 2011; DiSogra, C., Cobb, C. L., Chan, E., Dennis, J. M.